AIP-76: Consume task-emitted partition keys on asset events #66782

Open · anishgirianish wants to merge 2 commits into apache:main from anishgirianish:aip-76-partition-at-runtime-server-consumer

Conversation

anishgirianish (Contributor) commented May 12, 2026


Was generative AI tooling used to co-author this PR?
  • Yes (please specify the tool below)

Tasks can record per-emission partition keys via outlet_events[asset].add_partitions(...) (shipped in #65447).
This PR persists each key on the matching AssetEvent row, fans runtime fan-out emissions into one event per key, and back-fills
DagRun.partition_key when the task emitted exactly one key on a run that had none.
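
For context, a minimal producer-side sketch of the API this PR consumes. The DAG, asset URI, task name, and discovered keys are all illustrative; only outlet_events[asset].add_partitions(...) comes from #65447, and the exact argument shape is an assumption:

```python
# Minimal sketch (hypothetical DAG); outlet_events is injected from the
# task context, and add_partitions is the API named in #65447.
from airflow.sdk import Asset, dag, task

orders = Asset("s3://example-bucket/orders")

@dag(schedule=None)
def runtime_partition_example():
    @task(outlets=[orders])
    def discover_and_emit(*, outlet_events):
        # Keys discovered at runtime rather than passed in via params.
        keys = ["2026-05-12", "2026-05-13"]
        # Each key should fan out into its own AssetEvent row server-side.
        outlet_events[orders].add_partitions(keys)

    discover_and_emit()

runtime_partition_example()
```

On the server side, this PR turns each of those emitted keys into its own AssetEvent row.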

closes: #58474
related: #44146 #65300

cc @Lee-W


  • Read the Pull Request Guidelines for more information. Note: commit author/co-author name and email in commits become permanently public when merged.
  • For fundamental code changes, an Airflow Improvement Proposal (AIP) is needed.
  • When adding a dependency, check compliance with the ASF 3rd Party License Policy.
  • For significant user-facing changes, create a newsfragment: {pr_number}.significant.rst in airflow-core/newsfragments. You can add this file in a follow-up commit after the PR is created, so you know the PR number.

Comment threads on airflow-core/src/airflow/models/taskinstance.py (one marked outdated).
anishgirianish force-pushed the aip-76-partition-at-runtime-server-consumer branch from 40b2e65 to 5d12baa on May 13, 2026 18:14.
anishgirianish requested a review from Lee-W on May 13, 2026 18:17.
anishgirianish (Contributor, Author) commented:

Hi @Lee-W, thank you so much for the review. I have addressed the feedback in the latest push and would appreciate your re-review whenever you get a chance. Thank you.

Comment on lines +1486 to +1512
```python
payloads_by_asset: dict[SerializedAssetUniqueKey, list[OutletEventPayload]] = defaultdict(list)
for outlet_event in outlet_events:
    # Alias-emitted events are handled separately further down via
    # register_asset_change_for_alias, which uses the DagRun-level
    # partition_key. Per-emission partition keys do not fan out through
    # the alias path; emission via an alias produces one event per
    # resolved asset, all carrying the same dag_run_partition_key.
    if "source_alias_name" in outlet_event:
        continue
    asset_key = SerializedAssetUniqueKey(**outlet_event["dest_asset_key"])
    payloads_by_asset[asset_key].append(
        OutletEventPayload(
            extra=outlet_event["extra"], partition_key=outlet_event.get("partition_key")
        )
    )

# Back-fill DagRun.partition_key from the task emission when the task
# emitted exactly one distinct partition_key across all outlet events
# and the DagRun did not already have one set. This lets a task that
# discovers the partition at runtime (rather than via params) act as
# the source of truth for the DagRun-level key.
runtime_pks: set[str] = {
    payload.partition_key
    for payloads in payloads_by_asset.values()
    for payload in payloads
    if payload.partition_key is not None
}
```
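
As a reading aid, the back-fill rule described in the comment above condenses to something like the following sketch; dag_run here is a stand-in for the actual server-side DagRun row, not code from the diff:

```python
# Hypothetical condensation of the back-fill rule: adopt the task-emitted
# key only when it is unambiguous (exactly one distinct key across all
# outlet events) and the run does not already carry a partition_key.
if len(runtime_pks) == 1 and dag_run.partition_key is None:
    dag_run.partition_key = next(iter(runtime_pks))
```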
Member:
Suggested change

```diff
-payloads_by_asset: dict[SerializedAssetUniqueKey, list[OutletEventPayload]] = defaultdict(list)
-for outlet_event in outlet_events:
-    # Alias-emitted events are handled separately further down via
-    # register_asset_change_for_alias, which uses the DagRun-level
-    # partition_key. Per-emission partition keys do not fan out through
-    # the alias path; emission via an alias produces one event per
-    # resolved asset, all carrying the same dag_run_partition_key.
-    if "source_alias_name" in outlet_event:
-        continue
-    asset_key = SerializedAssetUniqueKey(**outlet_event["dest_asset_key"])
-    payloads_by_asset[asset_key].append(
-        OutletEventPayload(
-            extra=outlet_event["extra"], partition_key=outlet_event.get("partition_key")
-        )
-    )
-# Back-fill DagRun.partition_key from the task emission when the task
-# emitted exactly one distinct partition_key across all outlet events
-# and the DagRun did not already have one set. This lets a task that
-# discovers the partition at runtime (rather than via params) act as
-# the source of truth for the DagRun-level key.
-runtime_pks: set[str] = {
-    payload.partition_key
-    for payloads in payloads_by_asset.values()
-    for payload in payloads
-    if payload.partition_key is not None
-}
+payloads_by_asset: dict[SerializedAssetUniqueKey, list[OutletEventPayload]] = defaultdict(list)
+runtime_pks: set[str] = set()
+for outlet_event in outlet_events:
+    if "source_alias_name" in outlet_event:
+        continue
+    asset_key = SerializedAssetUniqueKey(**outlet_event["dest_asset_key"])
+    partition_key = outlet_event.get("partition_key")
+    payloads_by_asset[asset_key].append(
+        OutletEventPayload(extra=outlet_event["extra"], partition_key=partition_key)
+    )
+    if partition_key is not None:
+        runtime_pks.add(partition_key)
```

Comment on lines +1559 to +1561

```python
partition_key=payload.partition_key
if payload.partition_key is not None
else dag_run_partition_key,
```
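
For readers without the full diff, a hypothetical reconstruction of the call site these three lines belong to; AssetEvent, asset_events, payloads, and the loop are assumptions, with only the partition_key expression taken verbatim from the diff:

```python
# Assumed surroundings for the fragment under review; only the
# partition_key expression is verbatim.
for payload in payloads:
    asset_events.append(
        AssetEvent(
            extra=payload.extra,
            # Fall back to the DagRun-level key when the emission carried
            # no per-payload key (the behavior questioned below).
            partition_key=payload.partition_key
            if payload.partition_key is not None
            else dag_run_partition_key,
        )
    )
```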
Member: Why would it be None and fall back to the DagRun one?

Member: I kinda feel the user should be responsible for providing the partitions if they want to do a runtime one 🤔

WDYT? Or are there use cases where we need to fall back to the DagRun key?



Development

Successfully merging this pull request may close the issue "Manipulate partitioned asset events from task context".
